12 research outputs found

    A Pervasive Computational Intelligence based Cognitive Security Co-design Framework for Hyper-connected Embedded Industrial IoT

    Get PDF
    The amplified connectivity of everyday IoT devices exposes various attack paths that cybercriminals can exploit to launch malicious attacks. These dangers are further amplified by the resource limitations and heterogeneity of low-cost IoT/IIoT nodes, which render existing cloud-centered and fixed perimeter-oriented security tools unsuitable for dynamic IoT settings. The presented emulation assessment demonstrates the benefits of implementing a context-aware, co-design-oriented cognitive security method in integrated IIoT settings and delivers useful insights into strategy execution to drive future study. The innovative feature of our system is its ability to cope with intermittent network connectivity as well as node limitations in terms of scarce computational ability, limited buffer space (at the edge node), and finite energy. Based on real-time analytical data, the proposed scheme selects the most suitable end-to-end security configuration that complies with a given set of node constraints. The paper achieves its goals by identifying gaps in the security specific to the node subclass that is vital to our system's operations.
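
    As an illustration of the constraint-driven selection described above, the following Python sketch picks the strongest end-to-end security profile that fits a node's budget. The profile catalogue, field names and numbers are hypothetical placeholders; the abstract does not publish the actual selection logic.

```python
# Hypothetical catalogue of candidate end-to-end security configurations,
# each with a strength score and the resources it demands of a node.
PROFILES = [
    {"name": "aes256-dtls", "strength": 5, "cpu_mips": 80, "buf_kb": 32, "mj_per_msg": 9.0},
    {"name": "aes128-dtls", "strength": 4, "cpu_mips": 50, "buf_kb": 16, "mj_per_msg": 6.0},
    {"name": "present-psk", "strength": 2, "cpu_mips": 15, "buf_kb": 4,  "mj_per_msg": 2.5},
]

def select_profile(node):
    """Pick the strongest configuration that fits the node's constraints
    (compute, edge-buffer space, and per-message energy budget)."""
    feasible = [p for p in PROFILES
                if p["cpu_mips"] <= node["cpu_mips"]
                and p["buf_kb"] <= node["buf_kb"]
                and p["mj_per_msg"] <= node["mj_per_msg"]]
    return max(feasible, key=lambda p: p["strength"], default=None)

# A heavily constrained edge node can only afford the lightweight profile.
constrained_node = {"cpu_mips": 40, "buf_kb": 8, "mj_per_msg": 4.0}
print(select_profile(constrained_node))   # -> the 'present-psk' profile
```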

    Development of a Hybrid Algorithm for efficient Task Scheduling in Cloud Computing environment using Artificial Intelligence

    Get PDF
    Cloud computing is developing as a platform for next-generation systems where users pay as they go for cloud facilities, like any other utility. A cloud environment involves a set of virtual machines that share the same computation facility and storage. Due to the rapid rise in demand for cloud computing services, researchers are developing and experimenting with several algorithms to enhance the task-scheduling process of the machines, thereby offering users an optimal solution by which they can process the maximum number of tasks with minimal utilization of resources. Task scheduling denotes a set of policies to regulate the tasks processed by a system, and virtual machine scheduling is essential for effective operation in a distributed environment. To achieve efficient task scheduling of virtual machines, this study proposes a hybrid algorithm that integrates two prominent heuristic algorithms, the BAT Algorithm and Ant Colony Optimization (ACO), to optimize the virtual machine scheduling process. The performance evaluation of the three algorithms (BAT, ACO and Hybrid) reveals that the hybrid algorithm performs better than the other two.
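
    To make the BAT/ACO hybrid concrete, here is a minimal Python sketch of one plausible coupling: bat-style moves pull assignments toward the global best, while ACO pheromone trails bias exploration, with makespan as the objective. The task lengths, VM speeds, mixing probability and update rules are illustrative assumptions, not the paper's exact algorithm.

```python
import random

# Illustrative instance: task lengths (MI) and VM speeds (MIPS).
tasks = [400, 250, 620, 180, 530, 300, 410, 275]
vm_speed = [500, 750, 1000]

def makespan(assign):
    """Finish time of the busiest VM under a task->VM assignment."""
    load = [0.0] * len(vm_speed)
    for length, vm in zip(tasks, assign):
        load[vm] += length / vm_speed[vm]
    return max(load)

def aco_sample(pheromone):
    """Sample an assignment, one VM per task, from the pheromone trails."""
    return [random.choices(range(len(vm_speed)), weights=row)[0]
            for row in pheromone]

def hybrid_bat_aco(iters=200, bats=10, rho=0.1, pulse=0.5):
    pheromone = [[1.0] * len(vm_speed) for _ in tasks]
    swarm = [[random.randrange(len(vm_speed)) for _ in tasks]
             for _ in range(bats)]
    best = min(swarm, key=makespan)
    for _ in range(iters):
        for i in range(bats):
            # BAT-style move: pull genes toward the global best with
            # probability `pulse`, otherwise follow the ACO pheromone.
            probe = aco_sample(pheromone)
            cand = [g if random.random() < pulse else p
                    for g, p in zip(best, probe)]
            if makespan(cand) < makespan(swarm[i]):
                swarm[i] = cand
        best = min(swarm + [best], key=makespan)
        # ACO memory: evaporate, then reinforce the best assignment.
        for t, vm in enumerate(best):
            pheromone[t] = [(1 - rho) * w for w in pheromone[t]]
            pheromone[t][vm] += 1.0 / makespan(best)
    return best, makespan(best)

assignment, span = hybrid_bat_aco()
print("best assignment:", assignment, "makespan:", round(span, 2))
```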

    Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework

    Full text link
    Generative Artificial Intelligence (AI) tools based on Large Language Models (LLMs) use billions of parameters to extensively analyse large datasets and can extract critical private information such as context, specific details, and identifying information. This has raised serious threats to user privacy and reluctance to use such tools. This article proposes a conceptual model called PrivChatGPT, a privacy-preserving model for LLMs that consists of two main components: preserving user privacy during data curation/pre-processing, together with preserving private context, and a private training process for large-scale data. To demonstrate its applicability, we show how a private mechanism could be integrated into an existing model for training LLMs to protect user privacy; specifically, we employed differential privacy and private training using Reinforcement Learning (RL). We measure the privacy loss and evaluate the measure of uncertainty or randomness once differential privacy is applied. The model further recursively evaluates the level of privacy guarantees and the measure of uncertainty of public databases and resources during each update, as new information is added for training purposes. To critically evaluate the use of differential privacy for private LLMs, we hypothetically compared other mechanisms, e.g., Blockchain, private information retrieval (PIR), and randomisation, across various performance measures such as model performance and accuracy, computational complexity, and the privacy–utility trade-off. We conclude that differential privacy, randomisation, and obfuscation can impact the utility and performance of trained models; conversely, the use of Tor, Blockchain, and PIR may introduce additional computational complexity and high training latency. We believe that the proposed model could serve as a benchmark for proposing privacy-preserving LLMs for generative AI tools.
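
    As a concrete illustration of the differential-privacy ingredient, the sketch below shows the standard DP-SGD recipe (per-example gradient clipping plus calibrated Gaussian noise) in NumPy. This is the generic mechanism, not PrivChatGPT's actual training code; the learning rate, clipping norm and noise multiplier are placeholder values.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=np.random.default_rng(0)):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    average, then add Gaussian noise scaled to the clipping bound."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(clipped)  # per-coordinate std
    return params - lr * (mean_grad + rng.normal(0.0, sigma, mean_grad.shape))

# Toy usage: four per-example gradients for a 3-parameter model.
params = np.zeros(3)
grads = [np.array([0.5, -1.2, 2.0]), np.array([0.1, 0.3, -0.4]),
         np.array([3.0, 0.0, 0.0]), np.array([-0.2, 0.9, 0.6])]
params = dp_sgd_step(params, grads)
```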

    Mobility of Internet of Things and Fog Computing: Serious Concerns and Future Directions

    No full text
    IoT has transformed the modern world of technology into one in which different objects join the global network, exchanging, storing and processing data across different surrounding environments and actively intermediating. An enormous number of services can be envisioned and realised using the basic concepts of IoT. Since IoT devices are resource-constrained, the concept of edge computing, also termed fog computing, was proposed to assist IoT devices in handling and delivering information. Fog does not replace the centralized cloud but enhances its functionality, accessibility and reliability in many ways by distributing its principles and technologies across the Cloud of Things (CoT) continuum, specifically at the edge of the network. The proximity of fog and IoT nodes enables many distinctive features that should remain available and protected at all times, even as IoT devices move between locations. This article discusses the main challenges by analysing the concept of providing mobility support in a fog environment.

    Low-Complexity One-Dimensional Parallel Semi-Systolic Structure for Field Montgomery Multiplication Algorithm Perfect for Small IoT Edge Nodes

    No full text
    The use of IoT technology in several applications is hampered by security and privacy concerns with IoT edge nodes. These security flaws can only be resolved by implementing cryptographic protocols on the nodes, yet the resource constraints of edge nodes make implementing these protocols extremely difficult. The fundamental operation of most cryptographic protocols is finite-field multiplication, and their performance is significantly affected by how efficiently it is implemented. Therefore, this work focuses on a low-area, low-energy, high-speed one-dimensional bit-parallel semi-systolic multiplier for the Montgomery multiplication algorithm. The space and delay complexity analysis of the proposed multiplier structure reveals a significant reduction in delay and a marginal reduction in area compared to competitive one-dimensional multipliers. The ASIC synthesis report demonstrates that the suggested architecture saves a marginal amount of area as well as a significant amount of time, area–delay product (ADP), and power–delay product (PDP) compared to competing designs. These results indicate that the proposed multiplier layout is well suited for devices with limited resources such as IoT edge nodes and tiny embedded devices.
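
    For intuition about the operation the hardware accelerates, here is the classic radix-2 (bit-serial) Montgomery multiplication in Python, where each loop iteration corresponds roughly to one row of cells in a systolic array. This is the integer variant, shown for readability; the paper's multiplier targets a finite-field version in hardware.

```python
def mont_mult(a, b, n, k):
    """Radix-2 Montgomery multiplication: returns a*b*2^(-k) mod n,
    for odd n with 0 <= a, b < n < 2^k."""
    t = 0
    for i in range(k):
        t += ((a >> i) & 1) * b      # conditionally add b for bit i of a
        if t & 1:                    # force t even so the halving is exact
            t += n
        t >>= 1                      # exact division by 2
    return t - n if t >= n else t

# Sanity check against the definition (pow(2, -k, n) needs Python 3.8+).
n, k, a, b = 239, 8, 123, 200
assert mont_mult(a, b, n, k) == (a * b * pow(2, -k, n)) % n
```

    The point of the Montgomery form is visible in the loop: the modular reduction is replaced by a conditional add of n followed by a right shift, exactly the kind of step that maps onto a cheap systolic cell.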

    Retaliation against Ransomware in Cloud-Enabled PureOS System

    No full text
    Ransomware is malicious software that encrypts data and then demands payment to unlock them. The majority of ransomware variants use nearly identical command and control (C&C) servers, differing only in minor upgrades. There are numerous variations of ransomware, each of which can encrypt either the entire computer system or specific files. Malicious software needs to infiltrate a system before it can do any real damage, and manually inspecting all potentially malicious file types is a time-consuming and resource-intensive requirement of conventional security software. Using established metrics, this research delves into the complex issues of identifying and preventing ransomware. On the basis of real-world malware samples, we created a parameterized categorization strategy for functional classes and suggestive features. We also furnished a set of criteria highlighting the most commonly featured ones and investigated both behaviour and insights. We used a distinct operating system and a specific cloud platform to facilitate remote access and collaboration on files throughout the entire experimental infrastructure. With the help of the proposed ransomware detection mechanism, we were able to effectively recognize and prevent both state-of-the-art and modified ransomware anomalies. Aggregated logs revealed a consistent and satisfactory detection rate of 89%. To the best of our knowledge, no prior research has investigated ransomware detection and the impact of ransomware on PureOS, which offers a unique platform for PCs, mobile phones, and resource-constrained IoT (Internet of Things) devices.
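
    One commonly used "suggestive feature" in ransomware detection is a jump in file-content entropy, since encrypted output is statistically close to uniform. The Python sketch below illustrates that single heuristic; it is our illustrative example, not the paper's full parameterized categorization strategy, and the 7.5 bits/byte threshold is an assumption.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits/byte; encrypted content approaches 8.0."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(path, threshold=7.5, sample=65536):
    """Heuristic ransomware indicator: flag files whose leading bytes
    have suspiciously high entropy (e.g., after mass encryption)."""
    with open(path, "rb") as f:
        return byte_entropy(f.read(sample)) >= threshold
```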

    Development of an IoT-Based Solution Incorporating Biofeedback and Fuzzy Logic Control for Elbow Rehabilitation

    No full text
    The last few years have seen significant advances in neuromotor rehabilitation technologies, such as robotics and virtual reality. Rehabilitation robotics primarily focuses on devices, control strategies, scenarios and protocols aimed at recovering the sensory, motor and cognitive impairments often experienced by stroke victims. Remote rehabilitation can be adopted to relieve stress on healthcare facilities by limiting patients' trips to clinics, particularly during the COVID-19 pandemic. In this context, we have developed a remote-controlled intelligent robot for elbow rehabilitation. The proposed system offers real-time monitoring and ultimately provides an electronic health record (EHR). Rehabilitation is an area of medical practice that treats patients in pain, and this pain can prevent a person from engaging positively with therapy. To address this issue, the proposed solution incorporates a cascading fuzzy decision system to estimate patient pain. Indeed, as a safety measure, when the pain exceeds a certain threshold, the robot must stop the action even if the desired angle has not yet been reached. A fusion of sensors incorporating an electromyography (EMG) signal, feedback from the current sensor and feedback from the position encoder provides the fuzzy controller with the data needed to estimate pain. This measured pain is fed back into the control loop and processed to generate safe robot actions. The main contribution is the integration of vision-based gesture control, a cascading fuzzy-logic-based decision system and the IoT (Internet of Things) to help therapists take care of patients remotely, efficiently and reliably. Tests carried out on three different subjects showed encouraging results.
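
    To show how a fuzzy stage can turn sensor feedback into a pain estimate, here is a minimal Mamdani-style sketch in Python fusing normalised EMG and motor-current signals. The membership functions, rule base and 0.7 stop threshold are illustrative assumptions; the paper's cascading controller also uses position-encoder feedback and is more elaborate.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x):
    """Memberships of a normalised signal (0..1) in {low, med, high}."""
    return {"low": tri(x, -0.5, 0.0, 0.5),
            "med": tri(x, 0.0, 0.5, 1.0),
            "high": tri(x, 0.5, 1.0, 1.5)}

def pain_score(emg, current):
    """Fuse EMG and motor-current feedback into a 0..1 pain score via
    min rules and a weighted average over three output singletons."""
    e, c = fuzzify(emg), fuzzify(current)
    rules = {  # output singleton -> rule strength
        0.1: min(e["low"], c["low"]),
        0.5: max(min(e["med"], c["med"]),
                 min(e["low"], c["high"]), min(e["high"], c["low"])),
        0.9: max(min(e["high"], c["high"]),
                 min(e["med"], c["high"]), min(e["high"], c["med"])),
    }
    den = sum(rules.values())
    return sum(out * w for out, w in rules.items()) / den if den else 0.0

# Safety gate from the abstract: stop the robot when pain crosses a threshold.
if pain_score(0.8, 0.9) > 0.7:
    print("pain threshold exceeded -> stop actuation")
```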

    Federated Learning-Inspired Technique for Attack Classification in IoT Networks

    No full text
    More than 10 billion physical items are being linked to the internet to conduct activities more independently and with less human involvement, owing to Internet of Things (IoT) technology. IoT networks are considered a source of identifiable data that malicious attackers can use to carry out criminal actions through automated processes. Machine learning (ML)-assisted methods for IoT security have gained much attention in recent years. However, the ML training procedure requires large amounts of data to be transferred to a central server, even though the data are created continually by IoT devices at the edge. In other words, conventional ML relies on a single server to store all of its data, which makes it a less desirable option for domains concerned about user privacy. The Federated Learning (FL)-based anomaly detection technique, which utilizes decentralized on-device data to identify IoT network intrusions, is the proposed solution to this problem. By exchanging updated weights with a centralized FL server, the data are kept on local IoT devices while training rounds are federated over GRU (Gated Recurrent Unit) models. The ensemble module of the technique assesses updates from several sources to improve the accuracy of the global ML model. Experiments have shown that the proposed method surpasses state-of-the-art techniques in protecting user data, registering enhanced performance in statistical analysis, energy efficiency, memory utilization, attack classification, and client accuracy analysis for the identification of attacks.
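
    The core federated mechanic, training locally and averaging only the weights on the server (FedAvg), can be sketched in a few lines of NumPy. A logistic-regression client stands in for the paper's GRU models, and the synthetic clients, feature dimension and learning rate are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_train(global_w, X, y, lr=0.1, epochs=5):
    """On-device step: logistic-regression stand-in for the paper's GRUs.
    Only the updated weights (never X, y) leave the device."""
    w = global_w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fed_avg(updates, sizes):
    """Server step: weight each client's model by its local sample count."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Hypothetical round: three IoT clients holding private traffic features.
dim, global_w = 8, np.zeros(8)
clients = [(rng.normal(size=(50, dim)), rng.integers(0, 2, 50).astype(float))
           for _ in range(3)]
for _ in range(10):  # federated training rounds
    updates = [local_train(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
```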

    Smart Cybersecurity Framework for IoT-Empowered Drones: Machine Learning Perspective

    No full text
    Drone advancements have ushered in new trends and possibilities in a variety of sectors, particularly for small-sized drones. Drones provide navigational location services, which are made possible by the Internet of Things (IoT). Drone networks, on the other hand, are subject to privacy and security risks due to design flaws, so a protected network is necessary to achieve the desired performance. The goal of the current study is to examine recent privacy and security concerns affecting the network of drones (NoD). The research emphasizes the importance of a security-empowered drone network to prevent interception and intrusion. A hybrid ML technique combining logistic regression and random forest is used to classify data instances for maximal efficacy. By incorporating sophisticated artificial-intelligence-inspired techniques into the framework of a NoD, the proposed technique mitigates cybersecurity vulnerabilities while keeping the NoD protected and secure. For validation purposes, the suggested technique is tested against a challenging dataset, registering enhanced performance in terms of temporal efficacy (34.56 s) and statistical measures (precision (97.68%), accuracy (98.58%), recall (98.59%), F-measure (99.01%), reliability (94.69%), and stability (0.73)).
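
    The abstract does not specify how logistic regression and random forest are fused, so the sketch below shows one plausible reading: soft voting over the two models with scikit-learn, trained on synthetic stand-in data rather than the paper's drone-network dataset.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a drone-network intrusion dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Soft-voting hybrid: average the predicted probabilities of both models.
hybrid = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    voting="soft")
hybrid.fit(X_tr, y_tr)
print(classification_report(y_te, hybrid.predict(X_te)))
```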

    SVM based Generative Adversarial Networks for Federated learning and Edge Computing Attack Model and Outpoising

    No full text
    Machine learning algorithms are prone to attacks: an attacker can use malicious nodes to tamper with the training dataset, manipulating the learning process and reducing the algorithm's performance. Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bilevel optimization problem; solving such problems is computationally demanding and has limited applicability to some models, such as deep networks. In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers by generating adversarial training examples, i.e. samples that look like genuine data points but reduce the accuracy of the classifier during training. The proposed system has three components: a Generative Adversarial Network (GAN) generator, a discriminator, and the target classifier. The proposed system makes vulnerabilities easy to detect, and because its attacks closely resemble realistic ones, it can locate the regions of the underlying data distribution that are most susceptible to poisoning and therefore leave the network vulnerable. Our experiments support our claim that the proposed model is effective at compromising classifiers based on both classical machine learning algorithms and deep learning networks.
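
    A minimal PyTorch sketch of the three-component idea follows: a generator crafts points that a discriminator should judge genuine, while those same points, injected with flipped labels, degrade the target classifier. Network sizes, synthetic data, the loss weighting and the class-1 poisoning target are all illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 2

G = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, DIM))  # generator
D = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, 1))    # discriminator
C = nn.Sequential(nn.Linear(DIM, 16), nn.ReLU(), nn.Linear(16, 2))    # target classifier

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(C.parameters(), lr=1e-3)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

real = torch.randn(256, DIM) + 2.0           # stand-in "genuine" class-1 data
labels = torch.ones(256, dtype=torch.long)   # poison is mislabelled as class 1

for step in range(500):
    fake = G(torch.randn(64, DIM))

    # 1) Discriminator: tell genuine points from generated poison.
    d_loss = (bce(D(real[:64]), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Target classifier: trains on data that includes the poison.
    c_loss = ce(C(torch.cat([real[:64], fake.detach()])),
                torch.cat([labels[:64], labels[:64]]))
    opt_c.zero_grad()
    c_loss.backward()
    opt_c.step()

    # 3) Generator: look genuine to D while *maximising* C's loss,
    #    i.e. craft realistic points that degrade the classifier.
    g_loss = bce(D(fake), torch.ones(64, 1)) - ce(C(fake), labels[:64])
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```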